AAAI AI-Alert for Aug 4, 2020


Knowledge Graphs And AI: Interview With Chaitan Baru, University Of California San Diego (UCSD)

AITopics Custom Links

One of the challenges with modern machine learning systems is that they depend very heavily on large quantities of data to work well. This is especially true of deep neural nets, where many layers mean many connections, which in turn require large amounts of data and training before the system can deliver results at acceptable levels of accuracy and precision. Indeed, the ultimate expression of this massive-data, massive-network vision is the currently much-vaunted OpenAI GPT-3, which is so large that it can predict and generate almost any text with seemingly magical fluency. In many ways, however, GPT-3 is still a big-data magic trick, and Professor Luis Perez-Breva makes this exact point when he says that what we call machine learning isn't really learning at all.


Introducing 'The AI & Machine Learning Imperative'

#artificialintelligence

Leading organizations recognize the potential for artificial intelligence and machine learning to transform work and society. The technologies offer companies strategic new opportunities and integrate into a range of business processes -- customer service, operations, prediction, and decision-making -- in scalable, adaptable ways. As with other major waves of technology, AI requires organizations and managers to shed old ways of thinking and develop new skills and capabilities. "The AI & Machine Learning Imperative," an Executive Guide from MIT SMR, offers new insights from leading academics and practitioners in data science and AI. The guide explores how managers and companies can overcome challenges and identify opportunities across three key pillars: talent, leadership, and organizational strategy.


So many stars, so little time: Machine learning helps astroboffins spot the most oxygen-starved galaxy yet

#artificialintelligence

Astronomers have spied a tiny galaxy with the lowest oxygen levels yet observed, a discovery made possible thanks to a machine-learning algorithm. The galaxy, dubbed HSC J1631+4426, has an oxygen abundance of just 1.6 per cent of the Sun's – the lowest level yet seen, beating the previous record by just a smidgen. These extremely metal-poor galaxies are rare; they tend to be small, formless dwarf galaxies containing only a smattering of stars. The lack of heavier elements such as oxygen is a sign that the galaxy is still in its primordial stage. Elements heavier than hydrogen and helium, from carbon and oxygen all the way up to iron, can only be created by successive generations of stars.


Machine-learning test may improve kidney failure prediction in patients with diabetes

#artificialintelligence

For patients with type 2 diabetes or the APOL1-HR genotype, a machine learning test integrating biomarkers and electronic health record data demonstrated improved prediction of kidney failure compared with commonly used clinical models. According to Kinsuk Chauhan, MD, MPH, of the Icahn School of Medicine at Mount Sinai, and colleagues, diabetic kidney disease from type 2 diabetes accounts for 44% of all cases of end-stage kidney disease (ESKD), and the APOL1 high-risk genotypes are also associated with increased risk for chronic kidney disease progression and eGFR decline that may ultimately result in kidney failure. "Even though these populations are on average higher risk than the general population, accurate prediction of who will have rapid kidney function decline (RKFD) and worse kidney outcomes is lacking," the researchers wrote, noting that the current standard of using the kidney failure risk equation to predict ESKD has only been validated in patients who already have kidney disease, not in those with preserved kidney function at baseline. "Widespread electronic health records (EHR) usage provides the potential to leverage thousands of clinical features," the researchers added. "Standard statistical approaches are inadequate to leverage this data due to feature volume, unaligned nature of data and correlation structure."
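
The article does not specify which model the test uses, but the researchers' point about feature volume and correlation structure is easy to illustrate. Below is a minimal sketch, assuming a hypothetical table of EHR-derived features and a binary rapid-kidney-function-decline label, of how a tree-ensemble classifier can ingest thousands of correlated, partially missing features where a hand-specified regression with a few covariates cannot; the file name, column names, and hyperparameters are invented for the example.

```python
# Minimal sketch (not the study's actual model): a gradient-boosted classifier
# trained on high-dimensional, correlated EHR-style features. The CSV file and
# column names are hypothetical.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical table: one row per patient, thousands of numeric EHR-derived
# features (labs, vitals, diagnoses) plus biomarkers, and a binary RKFD label.
df = pd.read_csv("ehr_features.csv")
X = df.drop(columns=["rapid_kidney_function_decline"])
y = df["rapid_kidney_function_decline"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Tree ensembles tolerate correlated features and missing values far better
# than a regression built on a handful of pre-selected covariates.
model = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.05)
model.fit(X_train, y_train)

print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```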


Applying Linearly Scalable Transformers to Model Longer Protein Sequences

#artificialintelligence

In a bid to make transformer models even better for real-world applications, researchers from Google, the University of Cambridge, DeepMind and the Alan Turing Institute have proposed a new transformer architecture called "Performer" -- based on what they call fast attention via orthogonal random features (FAVOR). First proposed in 2017 and believed to be particularly well suited for language understanding tasks, the transformer is a neural network architecture built around a self-attention mechanism. To date, in addition to achieving state-of-the-art performance in natural language processing and neural machine translation tasks, transformer models have also performed well across other machine learning (ML) tasks such as document generation/summarization, time series prediction, image generation, and analysis of biological sequences. Neural networks usually process language by generating fixed- or variable-length vector-space representations. A transformer, however, performs only a small, constant number of steps -- in each step, it applies a self-attention mechanism that can directly model relationships between all words in a sentence, regardless of their respective positions.
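
To make the scaling argument concrete, here is a minimal NumPy sketch, not the paper's implementation, contrasting standard softmax self-attention (which materialises an L × L score matrix) with a random-feature approximation in the spirit of FAVOR that rewrites the computation as phi(Q)(phi(K)ᵀV) and therefore scales linearly with sequence length; the feature map below omits the orthogonality trick the authors use, and the dimensions are arbitrary.

```python
# Sketch of the idea behind FAVOR-style linear attention: a kernel feature map
# phi lets us approximate softmax(Q K^T) V as phi(Q) @ (phi(K)^T @ V), avoiding
# the L x L attention matrix. Simplified; not the Performer's exact construction.
import numpy as np

rng = np.random.default_rng(0)
L, d, m = 512, 64, 128          # sequence length, head dimension, random features

Q = rng.standard_normal((L, d)) / d**0.25   # fold the usual 1/sqrt(d) scaling into Q and K
K = rng.standard_normal((L, d)) / d**0.25
V = rng.standard_normal((L, d))

# Standard softmax self-attention: O(L^2 d) time, O(L^2) memory.
scores = Q @ K.T
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
out_exact = A @ V

# Random-feature estimator of the softmax kernel exp(q.k):
# phi(x) = exp(W x - |x|^2 / 2) / sqrt(m) with rows of W drawn from N(0, I).
W = rng.standard_normal((m, d))

def phi(X):
    return np.exp(X @ W.T - 0.5 * (X**2).sum(axis=1, keepdims=True)) / np.sqrt(m)

Qp, Kp = phi(Q), phi(K)
num = Qp @ (Kp.T @ V)                          # O(L m d): linear in sequence length
den = Qp @ Kp.sum(axis=0, keepdims=True).T     # row-wise normalisation
out_approx = num / den

print("mean abs deviation from exact attention:", np.abs(out_exact - out_approx).mean())
```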


GPT-3: an AI game-changer or an environmental disaster? | John Naughton

The Guardian > Technology

Unless you've been holidaying on Mars, or perhaps in Spain (alongside the transport secretary), you'll have noticed some fuss on social media about something called GPT-3. The GPT bit stands for the "generative pre-training" of a language model that acquires knowledge of the world by "reading" enormous quantities of written text. The "3" indicates that this is the third generation of the system. GPT-3 is a product of OpenAI, an artificial intelligence research lab based in San Francisco. In essence, it's a machine-learning system that has been fed (trained on) 45 terabytes of text data. Given that a terabyte (TB) is a trillion bytes, that's quite a lot.


OpenAI's latest breakthrough is astonishingly powerful, but still fighting its flaws

#artificialintelligence

The most exciting new arrival in the world of AI looks, on the surface, disarmingly simple. It's not some subtle game-playing program that can outthink humanity's finest or a mechanically advanced robot that backflips like an Olympian. Instead, it's essentially an autocomplete program: you start typing and it predicts what comes next. But while this sounds simple, it's an invention that could end up defining the decade to come. The program itself is called GPT-3 and it's the work of San Francisco-based AI lab OpenAI, an outfit that was founded with the ambitious (some say delusional) goal of steering the development of artificial general intelligence or AGI: computer programs that possess all the depth, variety, and flexibility of the human mind. For some observers, GPT-3 -- while very definitely not AGI -- could well be the first step toward creating this sort of intelligence.
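
GPT-3 itself is accessible only through OpenAI's hosted API, but the "type something, predict what comes next" principle the article describes can be illustrated with the freely available GPT-2, its smaller predecessor, via the Hugging Face transformers library; the prompt below is arbitrary and the sampling settings are just reasonable defaults.

```python
# Illustration of autoregressive text prediction with GPT-2 (a stand-in for
# GPT-3, which is API-only). The model repeatedly predicts the next token
# given everything typed so far.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The most exciting new arrival in the world of AI is"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```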


Chinese AI Is Creating an Axis of Autocracy

The Atlantic - Technology

After clearing the institute's security, I was told to wait in a lobby monitored by cameras. On its walls were posters of China's most consequential postwar leaders. One showed Mao Zedong, looking serene, as though satisfied with having freed China from the Western yoke. Next to him was a fuzzy black-and-white shot of Deng Xiaoping visiting the institute in his later years, after his economic reforms had set China on a course to reclaim its traditional global role as a great power. The lobby's most prominent poster depicted Xi Jinping in a crisp black suit.


Facebook develops AI algorithm that learns to play poker on the fly

#artificialintelligence

Facebook researchers have developed a general AI framework called Recursive Belief-based Learning (ReBeL) that they say achieves better-than-human performance in heads-up, no-limit Texas hold'em poker while using less domain knowledge than any prior poker AI. They assert that ReBeL is a step toward developing universal techniques for multi-agent interactions -- in other words, general algorithms that can be deployed in large-scale, multi-agent settings. Potential applications run the gamut from auctions, negotiations, and cybersecurity to self-driving cars and trucks. Combining reinforcement learning with search at both training and test time has led to a number of advances. Reinforcement learning is where agents learn to achieve goals by maximizing rewards, while search is the process of navigating from a start state to a goal state.
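
ReBeL itself reasons over public belief states in imperfect-information games, which is well beyond a short snippet, but the general recipe the article describes -- a learned value function improved through self-play, with a search procedure that consults that value function at its frontier -- can be sketched on a toy perfect-information game. Everything below (the game, the value table standing in for a value network, the update rule) is an invented illustration, not Facebook's algorithm.

```python
# Toy "reinforcement learning + search" loop on Nim: take 1 or 2 stones,
# whoever takes the last stone wins. A value table stands in for a value
# network; depth-limited search bootstraps better training targets from it.
import random

ACTIONS = (1, 2)
value = {}                       # state -> estimated value for the player to move

def search(state, depth):
    """Depth-limited negamax that consults the learned value table at its frontier."""
    if state == 0:
        return -1.0              # the opponent took the last stone: player to move has lost
    if depth == 0:
        return value.get(state, 0.0)
    return max(-search(state - a, depth - 1) for a in ACTIONS if a <= state)

def self_play(start=15, depth=2, lr=0.5, episodes=2000):
    for _ in range(episodes):
        state = start
        while state > 0:
            # search produces a sharper estimate than the raw table...
            target = search(state, depth)
            # ...which becomes the training target for the value table.
            value[state] = value.get(state, 0.0) + lr * (target - value.get(state, 0.0))
            # act mostly greedily with respect to the search, with some exploration
            legal = [a for a in ACTIONS if a <= state]
            if random.random() < 0.1:
                action = random.choice(legal)
            else:
                action = max(legal, key=lambda a: -search(state - a, depth - 1))
            state -= action

self_play()
# Positions that are multiples of 3 are losing for the player to move; the
# learned values should end up near -1 there and near +1 elsewhere.
print({s: round(v, 2) for s, v in sorted(value.items())})
```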


Deep learning‐based methods for individual recognition in small birds

#artificialintelligence

Individual identification is a crucial step in answering many questions in evolutionary biology and is mostly performed by marking animals with tags. Such methods are well‐established, but they often make data collection and analyses time‐consuming, or limit the contexts in which data can be collected. Recent computational advances, specifically in deep learning, can help overcome the limitations of collecting large‐scale data across contexts. However, one of the bottlenecks preventing the application of deep learning to individual identification is the need to collect hundreds to thousands of individually labelled pictures to train convolutional neural networks (CNNs). Here we describe procedures for automating the collection of training data, generating training datasets, and training CNNs to allow identification of individual birds.
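
The excerpt stops before the methods, so as a rough illustration of the final training step, here is a minimal transfer-learning sketch, not the authors' actual pipeline, that fine-tunes an ImageNet-pretrained CNN to classify cropped images by individual identity; the directory layout and hyperparameters are assumptions made for the example.

```python
# Sketch: fine-tune a pretrained CNN so each output class is one individual bird.
# Assumes a hypothetical folder layout data/train/<individual_id>/*.jpg.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classification
# head with one output unit per individual.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```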